

Distributional Evaluation of Generative Models via Relative Density Ratio

Xu, Yuliang, Wei, Yun, Ma, Li

arXiv.org Machine Learning

We propose a function-valued evaluation metric for generative models based on the relative density ratio (RDR), designed to characterize distributional differences between real and generated samples. As an evaluation metric, the RDR function preserves the $\phi$-divergence between two distributions, enables sample-level evaluation that facilitates downstream investigations of feature-specific distributional differences, and has a bounded range that affords clear interpretability and numerical stability. Function estimation of the RDR is achieved efficiently through optimization on the variational form of the $\phi$-divergence. We provide theoretical convergence rate guarantees for general estimators based on M-estimator theory, as well as the convergence rate of neural network-based estimators when the true ratio lies in an anisotropic Besov space. We demonstrate the power of the proposed RDR-based evaluation through numerical experiments on MNIST, CelebA64, and the American Gut Project microbiome data. We show that the estimated RDR enables not only effective overall comparison of competing generative models, but also a convenient way to reveal the underlying nature of goodness-of-fit. This enables one to assess support overlap, coverage, and fidelity while pinpointing regions of the sample space where generators concentrate and revealing the features that drive the most salient distributional differences.
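The variational route described here can be illustrated in a few lines. For the KL case, $\sup_T \mathbb{E}_p[T] - \mathbb{E}_q[e^{T-1}]$ is attained at $T^*(x) = 1 + \log(p(x)/q(x))$, so fitting $T$ on samples recovers the log-ratio. A minimal sketch with a linear $T$ and toy 1-D Gaussians (an illustration of the variational idea only, not the authors' RDR estimator):

```python
import math, random

random.seed(0)

# Toy data: p = N(0,1) plays the "real" sample, q = N(1,1) the "generated" one.
xs_p = [random.gauss(0.0, 1.0) for _ in range(1000)]
xs_q = [random.gauss(1.0, 1.0) for _ in range(1000)]

# Variational form of KL(p||q): sup_T  E_p[T] - E_q[exp(T - 1)],
# attained at T*(x) = 1 + log(p(x)/q(x)).  Fit a linear T(x) = a + b*x
# by gradient ascent on the sample version of this objective.
a, b = 0.0, 0.0
mean_p = sum(xs_p) / len(xs_p)   # gradient of the E_p[T] term is constant
lr = 0.1
for _ in range(800):
    ga, gb = 1.0, mean_p         # d/da, d/db of E_p[a + b*x]
    for x in xs_q:
        e = math.exp(min(a + b * x - 1.0, 10.0))   # clipped for stability
        ga -= e / len(xs_q)
        gb -= e * x / len(xs_q)
    a += lr * ga
    b += lr * gb

def log_ratio(x):                # log p(x)/q(x) is approximately T(x) - 1
    return a + b * x - 1.0
```

For these two Gaussians the true log-ratio is $(1-2x)/2$, so the fitted `log_ratio` should be positive near 0 and negative near 2.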


The $f$-divergence and Loss Functions in ROC Curve

Liu, Song

arXiv.org Machine Learning

Given two data distributions and a test score function, the Receiver Operating Characteristic (ROC) curve shows how well such a score separates the two distributions. But can the ROC curve itself serve as a measure of discrepancy between two distributions? This paper shows that when the data likelihood ratio is used as the test score, the arc length of the ROC curve gives rise to a novel $f$-divergence measuring the difference between the two data distributions. Approximating this arc length using a variational objective and empirical samples leads to empirical risk minimization with previously unknown loss functions. We provide a Lagrangian dual objective and introduce kernel models into the estimation problem. We study the non-parametric convergence rate of this estimator and show that, under mild smoothness conditions on the arctangent of the true density ratio function, the rate of convergence is $O_p(n^{-\beta/4})$ (where $\beta \in (0,1]$ depends on the smoothness).
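A rough illustration of the geometry involved: the ROC is a curve from (0,0) to (1,1), so its arc length ranges from $\sqrt{2}$ (uninformative scores, diagonal ROC) to 2 (perfectly separating scores). The sketch below builds an empirical ROC on pooled-quantile thresholds (a construction chosen here so the diagonal case is visible; the paper's variational estimator works differently):

```python
import math, random

random.seed(1)

def roc_arc_length(pos, neg, grid=20):
    """Arc length of an empirical ROC built on pooled-quantile thresholds."""
    pooled = sorted(pos + neg)
    ths = [pooled[i * (len(pooled) - 1) // grid] for i in range(grid + 1)]
    pts = sorted({(sum(s >= t for s in neg) / len(neg),   # FPR
                   sum(s >= t for s in pos) / len(pos))   # TPR
                  for t in ths} | {(0.0, 0.0), (1.0, 1.0)})
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Disjoint score distributions: ROC hugs the axes, arc length near 2.
sep = roc_arc_length([2.0 + random.random() for _ in range(200)],
                     [random.random() for _ in range(200)])
# Identical score distributions: ROC near the diagonal, arc length near sqrt(2).
rnd = roc_arc_length([random.random() for _ in range(200)],
                     [random.random() for _ in range(200)])
```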


Batch Policy Learning in Average Reward Markov Decision Processes

Liao, Peng, Qi, Zhengling, Murphy, Susan

arXiv.org Machine Learning

We study the problem of policy optimization in Markov Decision Processes over an infinite time horizon (Puterman, 1994). We focus on the batch (i.e., offline) setting, where historical data consisting of multiple trajectories have been previously collected using some behavior policy. Our goal is to learn a new policy with guaranteed performance when implemented in the future. In this work, we develop a data-efficient method to learn, from a training set composed of multiple trajectories, the policy that optimizes the long-term average reward within a pre-specified policy class. Furthermore, we establish a finite-sample regret guarantee, i.e., a bound on the difference between the average reward of the optimal policy in the class and the average reward of the policy estimated by our method. This work is motivated by the development of just-in-time adaptive interventions in mobile health (mHealth) applications (Nahum-Shani et al., 2017). Our method can be used to learn a treatment policy that maps real-time information about an individual's status and context to a particular treatment at each of many decision times, in order to support health behaviors.
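The quantity being optimized above, the long-term average reward of a fixed policy in a finite MDP, is the stationary expectation of the one-step reward, $\rho(\pi) = \sum_s d_\pi(s)\, r_\pi(s)$. A toy sketch computing it by power iteration on a hypothetical two-state chain (an illustration of the criterion only, not the paper's batch estimator):

```python
# Policy-induced Markov chain on two states, with per-state expected rewards.
P = [[0.9, 0.1],   # transition probabilities under the policy
     [0.5, 0.5]]
r = [1.0, 0.0]     # expected one-step reward in each state

# Power iteration for the stationary distribution d = d P.
d = [0.5, 0.5]
for _ in range(1000):
    d = [sum(d[s] * P[s][t] for s in range(2)) for t in range(2)]

# Long-term average reward: rho = sum_s d(s) * r(s).
avg_reward = sum(d[s] * r[s] for s in range(2))   # -> 5/6 for this chain
```

For this chain the stationary distribution is (5/6, 1/6), so the average reward is 5/6.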


Posterior Ratio Estimation for Latent Variables

Zhang, Yulong, Yi, Mingxuan, Liu, Song, Kolar, Mladen

arXiv.org Machine Learning

Comparing the underlying distributions of two given datasets is an important task in the machine learning community and has a wide range of applications. For example, change detection algorithms (Kawahara and Sugiyama, 2012) compare datasets collected at different time points and report how the underlying distribution has shifted over time; transfer learning algorithms (Quionero-Candela et al., 2009) utilize the estimated differences between two datasets to efficiently share information between different tasks. The Generative Adversarial Net (GAN) (Goodfellow et al., 2014) learns an implicit generative model whose output minimizes the differences between an artificial dataset and a real dataset. Various computational methods have been proposed for comparing underlying distributions given two sets of observations. For example, Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) computes the distance between the kernel mean embeddings of two datasets in a Reproducing Kernel Hilbert Space (RKHS).
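The squared MMD mentioned above has a simple plug-in estimator built from kernel evaluations alone: $\widehat{\mathrm{MMD}}^2 = \overline{k(x,x')} + \overline{k(y,y')} - 2\,\overline{k(x,y)}$. A minimal one-dimensional sketch with a Gaussian kernel (bandwidth fixed arbitrarily here; in practice it is tuned, e.g., by the median heuristic):

```python
import math, random

random.seed(2)

def mmd2(xs, ys, bw=1.0):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel."""
    k = lambda u, v: math.exp(-(u - v) ** 2 / (2 * bw ** 2))
    kxx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

# Same distribution: MMD^2 near zero.  Shifted distribution: clearly positive.
same = mmd2([random.gauss(0, 1) for _ in range(200)],
            [random.gauss(0, 1) for _ in range(200)])
shift = mmd2([random.gauss(0, 1) for _ in range(200)],
             [random.gauss(2, 1) for _ in range(200)])
```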


Training Neural Networks for Likelihood/Density Ratio Estimation

Moustakides, George V., Basioti, Kalliopi

arXiv.org Machine Learning

Various problems in engineering and statistics require the computation of the likelihood ratio function of two probability densities. In classical approaches the two densities are assumed known, or assumed to belong to some known parametric family. In a data-driven version we replace this requirement with the availability of data sampled from the densities of interest. For most well-known problems in detection and hypothesis testing we develop solutions by providing neural network-based estimates of the likelihood ratio or its transformations. This task necessitates the definition of proper optimization problems that can be used for training the network. The main purpose of this work is to offer a simple and unified methodology for defining such optimization problems, with guarantees that the solution is indeed the desired function. Our results are extended to cover estimates for likelihood ratios of conditional densities and estimates for statistics encountered in local approaches.

The likelihood ratio of two probability densities is a function that appears in a variety of problems in engineering and statistics. Characteristic examples [1], [2] include hypothesis testing, signal detection, sequential hypothesis testing, sequential detection of changes, etc. Many of these problems also use the likelihood ratio in a transformed form, the most frequent example being the log-likelihood ratio. In all these problems the main assumption is that the corresponding probability densities are available in some functional form. What we aim to do in this work is to replace this requirement with the availability of data sampled from each of the densities of interest. As mentioned, the computation of the likelihood ratio function relies on knowledge of the probability densities, which, for the majority of applications, is an unrealistic assumption. One can instead propose parametric families of densities and, with the help of available data, estimate the parameters and form the likelihood ratio function. However, with the advent of data science and deep learning there has been a phenomenal increase in the need to process data coming from images, videos, etc. For most of these cases it is very difficult to propose a meaningful parametric family of densities that could reliably describe their statistical behavior, so such techniques tend to be unsuitable for these datasets. If parametric families cannot be employed, one can always resort to nonparametric density estimation [3] and then form the likelihood ratio. These approaches are purely data-driven but require two different approximations, one for each density.
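One standard optimization of the kind referred to above is the logistic loss: training a score $u(x)$ to classify samples of the two densities drives $\mathrm{sigmoid}(u(x))$ toward $f_1/(f_0+f_1)$ at the optimum (equal sample sizes), so $u(x)$ itself estimates the log-likelihood ratio. A sketch with a linear model standing in for a neural network (an illustration of this classification trick, not necessarily the authors' specific objectives):

```python
import math, random

random.seed(3)

# f0 = N(0,1) (label y=0) vs f1 = N(1,1) (label y=1); the true
# log-likelihood ratio is log f1(x)/f0(x) = x - 0.5.
x0 = [random.gauss(0.0, 1.0) for _ in range(2000)]
x1 = [random.gauss(1.0, 1.0) for _ in range(2000)]
data = [(x, 0) for x in x0] + [(x, 1) for x in x1]

sig = lambda z: 1.0 / (1.0 + math.exp(-z))
a, b = 0.0, 0.0                  # u(x) = a + b*x, a linear stand-in for a network
lr = 0.5
for _ in range(200):
    ga = gb = 0.0
    for x, y in data:
        e = sig(a + b * x) - y   # per-sample gradient of the logistic loss
        ga += e / len(data)
        gb += e * x / len(data)
    a -= lr * ga
    b -= lr * gb

def log_lr(x):                   # estimated log f1(x)/f0(x); truth is x - 0.5
    return a + b * x
```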


Model Inference with Stein Density Ratio Estimation

Liu, Song, Jitkrittum, Wittawat, Ek, Carl Henrik

arXiv.org Machine Learning

The Kullback-Leibler divergence from model to data is a classic goodness-of-fit measure but can be intractable in many cases. In this paper, we estimate the ratio function between a data density and a model density with the help of the Stein operator. The estimated density ratio allows us to compute the likelihood ratio function, which is a surrogate for the actual Kullback-Leibler divergence from model to data. By minimizing this surrogate, we can perform model fitting and inference from either a frequentist or a Bayesian point of view. This paper discusses methods, theory, and algorithms for performing such tasks. Our theoretical claims are verified by experiments, and examples are given demonstrating the usefulness of our methods.
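The appeal of Stein operators in this setting is that they depend on the model only through its score $\nabla \log p$, so the intractable normalizing constant never appears. For the 1-D Langevin-Stein operator $(\mathcal{A}_p f)(x) = f(x)\,\partial_x \log p(x) + f'(x)$, Stein's identity $\mathbb{E}_p[\mathcal{A}_p f] = 0$ can be checked directly by Monte Carlo (a one-dimensional illustration of the operator, not the paper's estimator):

```python
import math, random

random.seed(4)

# Langevin-Stein operator for density p: (A_p f)(x) = f(x)*score(x) + f'(x),
# where score = d/dx log p.  Only the score of p is needed.
score = lambda x: -x                      # score of p = N(0,1)
f = lambda x: math.tanh(x)                # any smooth bounded test function
fp = lambda x: 1.0 - math.tanh(x) ** 2    # its derivative

xs = [random.gauss(0.0, 1.0) for _ in range(50000)]   # samples from p
ys = [random.gauss(1.0, 1.0) for _ in range(50000)]   # samples from q != p

# Stein's identity: E_p[(A_p f)(X)] = 0; under mismatched samples it drifts.
stein_p = sum(f(x) * score(x) + fp(x) for x in xs) / len(xs)
stein_q = sum(f(y) * score(y) + fp(y) for y in ys) / len(ys)
```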


Wald-Kernel: Learning to Aggregate Information for Sequential Inference

Teng, Diyan, Ertin, Emre

arXiv.org Machine Learning

Sequential hypothesis testing is a desirable decision-making strategy in any time-sensitive scenario. Compared with fixed sample-size testing, sequential testing can meet identical probability-of-error requirements using fewer samples on average. For a binary detection problem with known density functions, it is well known that accumulating the likelihood ratio statistic is time-optimal under a fixed error rate constraint. This paper considers the problem of learning a binary sequential detector from training samples when the density functions are unavailable. We formulate the problem as constrained likelihood ratio estimation, which can be solved efficiently through convex optimization by imposing a Reproducing Kernel Hilbert Space (RKHS) structure on the log-likelihood ratio function. In addition, we provide a computationally efficient approximate solution for large-scale data sets. The proposed algorithm, namely Wald-Kernel, is tested on a synthetic data set and two real-world data sets, together with previous approaches for likelihood ratio estimation. Our empirical results show that the classifier trained through the proposed technique achieves a smaller average sampling cost than previous approaches in the literature at the same error rate.
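The known-density baseline that this paper relaxes is Wald's SPRT: accumulate the log-likelihood ratio and stop when it crosses thresholds set by the target error rates. A sketch for two known Gaussians (the classical procedure, not the learned Wald-Kernel detector):

```python
import math, random

random.seed(5)

# Wald's SPRT for f0 = N(0,1) vs f1 = N(1,1): the per-observation
# log-likelihood ratio is x - 0.5.  Thresholds via Wald's approximations.
alpha = beta = 0.01
A = math.log((1 - beta) / alpha)   # upper threshold -> decide H1
B = math.log(beta / (1 - alpha))   # lower threshold -> decide H0

def sprt(stream):
    s, n = 0.0, 0
    for x in stream:
        s += x - 0.5               # accumulate the log-likelihood ratio
        n += 1
        if s >= A:
            return "H1", n
        if s <= B:
            return "H0", n

def h1_stream():                   # data actually drawn under H1
    while True:
        yield random.gauss(1.0, 1.0)

runs = [sprt(h1_stream()) for _ in range(200)]
decisions = [d for d, _ in runs]
avg_n = sum(n for _, n in runs) / len(runs)
```

With these thresholds the test decides H1 in roughly 99% of H1 runs, using on the order of ten samples on average rather than a large fixed sample size.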


Interpreting Outliers: Localized Logistic Regression for Density Ratio Estimation

Yamada, Makoto, Liu, Song, Kaski, Samuel

arXiv.org Machine Learning

We propose an inlier-based outlier detection method capable of both identifying outliers and explaining why they are outliers, by identifying the outlier-specific features. Specifically, we employ an inlier-based outlier detection criterion that uses the ratio of inlier and test probability densities as a measure of the plausibility of being an outlier. For estimating the density ratio function, we propose a localized logistic regression algorithm. Thanks to the locality of the model, variable selection can be outlier-specific, which helps interpret why points are outliers in a high-dimensional space. Through synthetic experiments, we show that the proposed algorithm can successfully detect the important features for outliers. Moreover, we show that the proposed algorithm tends to outperform existing algorithms on benchmark datasets.
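A LOESS-style stand-in for the localized idea (a simplified hypothetical sketch, not the paper's exact algorithm): to score a test point, fit a logistic classifier between inlier and test samples with each sample weighted by a Gaussian kernel centered at that point, and read off the local log-odds, which estimates the local log density ratio:

```python
import math, random

random.seed(6)

# Inliers ~ N(0,1); the test set mixes 90% inlier-like points with 10%
# outliers concentrated near x = 5.
inliers = [random.gauss(0.0, 1.0) for _ in range(500)]
test = ([random.gauss(0.0, 1.0) for _ in range(450)]
        + [random.gauss(5.0, 0.3) for _ in range(50)])
data = [(x, 0) for x in inliers] + [(x, 1) for x in test]

def local_log_ratio(x0, bw=1.0, steps=200, lr=0.5):
    """Fit a logistic model around x0 with Gaussian sample weights; the
    intercept is the local log-odds of 'test' vs 'inlier' density at x0."""
    wts = [(x, y, math.exp(-(x - x0) ** 2 / (2 * bw ** 2))) for x, y in data]
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y, w in wts:
            e = w * (1.0 / (1.0 + math.exp(-(a + b * (x - x0)))) - y)
            ga += e
            gb += e * (x - x0)
        a -= lr * ga / len(wts)
        b -= lr * gb / len(wts)
    return a

# The outlier region gets a much higher local ratio than the inlier region.
r_out = local_log_ratio(5.0)
r_in = local_log_ratio(0.0)
```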